webpage optimization


Content

Ways to improve the website speed
Choose the right hosting provider
Leverage browser caching
Enable keep-alive on the web server
Enable GZIP compression
Avoid landing page redirects whenever possible
Use a content delivery network (CDN)
Optimize images
Disable query strings for static resources
Specify a character set
Minify website's scripts
Reduce domain name system (DNS) lookups
Put CSS at the top and JavaScript at the bottom
Fix broken URLs
Combine JavaScript and CSS files


Ways to improve the website speed

top

Website speed optimization is the set of strategies and best practices implemented to make a website as fast as possible. Website speed optimization offers other benefits as well, like a higher conversion rate, lower bounce rate, and improved user experience.

There is no silver bullet for improving loading speeds. Some methods will yield better results than others, depending on how the website is currently set up. However, to make sure that a website performs optimally, as many optimization techniques as possible should be implemented.

Before making any changes that impact how a site loads and handles content, it's worth auditing its current performance. To start, a free tool like "PageSpeed Insights" can be used. It will assess the core web vitals on mobile (by default) or desktop and let you know if you passed. It will also provide a color-coded score reflecting your site's overall performance, and identify opportunities for improving your score.

Once website speed issues are identified, it's tempting to try to fix everything at once. But this approach is not recommended. Instead, try to prioritize potential fixes based on what matters most to visitors.

For example, if a site takes a significant amount of time to start loading, focus efforts on server-side concerns such as hosting provider problems or DNS issues. This takes priority even if the content on the site also struggles to deliver at speed, because visitors won't stick around to see content if the page itself takes too long to load.

Recommendations and diagnostics provided by speed testing tools, like PageSpeed Insights, can also be used to help prioritize optimization efforts.

For example, prioritize reducing main-thread work - the time spent parsing, compiling and executing JS - over avoiding large layout shifts.


Choose the right hosting provider.

top

Every hosting provider offers a different baseline performance out of the box, but it is difficult to compare the performance of a web host's shared plans with more advanced offerings such as dedicated servers.

While several factors could contribute to speed issues, including the geographical location of the provider, their physical infrastructure and the overall bandwidth of their network connection, the type of web hosting a website is using can also impact performance.

Three common types are shared, VPS, and dedicated hosting.

Shared: although shared hosting options are the most cost-effective, they split hosting resources among multiple sites, which lowers overall performance. This will especially cause problems if a site gets spikes in traffic or a consistent amount of high traffic.
VPS: virtual private server (VPS) options logically segment services on a shared physical drive to improve performance but still face speed issues if resource loads are high.
Dedicated: dedicated servers are more expensive than shared or VPS options but will significantly boost your speed, regardless of resource load.


Leverage browser caching.

top

Caching is one of the most critical steps to improve loading times. By enabling browser caching, visitors' browsers are told to store some (or all) of the site's static files on their computers temporarily. Since those visitors don't need to reload the site fully every time they return, loading times should be much faster on subsequent visits.

Caching allows browsers to pre-load some of the content to speed up webpage delivery. Many content management systems (CMS) will automatically cache the most current versions of a site, but it's also possible to extend this caching timeframe through CMS settings. This is especially beneficial for content that doesn't regularly change on the site, like logos, static images, downloadable files, JavaScript files, and stylesheets.

If your CMS doesn't offer this feature, then consider installing a caching plugin.

There are plenty of ways to leverage browser caching:

Using .htaccess file

Here are the exact steps to perform: a) access the cPanel of your hosting account; b) go to the root folder of the website; c) open the .htaccess file with a file editor (if you can't find the file, check whether you are able to view hidden files, since the .htaccess file is hidden by default); d) add the following edits to the bottom of the file; e) add Expires headers with long expiry times; f) add Cache-Control headers; g) unset ETag headers; h) do not make any other changes; i) save the file; j) rerun the test.

Setting the expiry time of resources such as images and CSS files requires a slight modification to the .htaccess file, which is found in the root of the hosting server. The Expires header values can be changed to boost performance. Set these values to be as long as makes sense for the site; one month is typically good enough.

example
            # Customize expires caching start - adjust the periods according to your needs
            <IfModule mod_expires.c>
                ExpiresActive On
                ExpiresByType text/html "access plus 600 seconds"
                ExpiresByType application/xhtml+xml "access plus 600 seconds"
                ExpiresByType text/css "access plus 1 month"
                ExpiresByType text/javascript "access plus 1 month"
                ExpiresByType application/javascript "access plus 1 month"
                ExpiresByType application/x-javascript "access plus 1 month"
                ExpiresByType application/pdf "access plus 1 month"
                ExpiresByType image/x-icon "access plus 1 year"
                ExpiresByType image/jpeg "access plus 1 year"
                ExpiresByType image/png "access plus 1 year"
                ExpiresByType image/gif "access plus 1 year"
                ExpiresDefault "access plus 1 month"
            </IfModule>
            # Expires caching end
        

With the above changes, we are setting quickly refreshing assets, such as the HTML of your page, to expire after 600 seconds (HTML typically changes more frequently). We are also setting assets such as CSS and JavaScript files to expire only once a month (such files only change when you modify your template or plugins).

Set image files to a long expiry time: a month is typically good enough too. This ensures that the largest media files, which take the most time to download, are kept on the visitors' machines and won't have to be downloaded again until the expiry time passes.

Add Cache-Control headers: the following settings should also be added to the file to set the Cache-Control headers.
example
            # BEGIN Cache-Control Headers
            <IfModule mod_headers.c>
                <FilesMatch "\.(ico|jpe?g|png|gif|swf)$">
                    Header append Cache-Control "public"
                </FilesMatch>
                <FilesMatch "\.(css)$">
                    Header append Cache-Control "public"
                </FilesMatch>
                <FilesMatch "\.(js)$">
                    Header append Cache-Control "private"
                </FilesMatch>
                <FilesMatch "\.(x?html?|php)$">
                    Header append Cache-Control "private, must-revalidate"
                </FilesMatch>
            </IfModule>
            # END Cache-Control Headers
        

We've already set the expiry dates, so we don't have to set them again here.

Unset ETag headers for multi-server sites or CDNs: this is important only if you are using a CDN or multiple servers to serve some of your resources. ETags are headers that are typically constructed using attributes that make them unique to each specific machine hosting a site (the technical reason: the server generates a hash, such as an MD5, from machine-specific attributes, making the tag unique to the server that produced it).

If a website is using a CDN or multiple servers to serve their pages, there is NO guarantee that the same server will be used - therefore the tags will not match when a browser gets the original component from one server and later tries to validate that component on a different server. For this reason, it would be best to UNSET them if you are using multiple servers or a CDN to host your website. This allows the Cache-control headers to actually control the caching rather than the ETags. Given that we've put in settings to control the caching through the Cache-Control headers, the ETags are no longer necessary - so we'll switch them off.
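The mismatch can be sketched in a few lines of Python. The inode numbers and timestamp below are made up for illustration: two origin servers holding the same file derive different tags from their filesystem attributes, while a hash of the content itself matches everywhere.

```python
import hashlib

def inode_style_etag(inode: int, size: int, mtime: int) -> str:
    # Mimics a legacy FileETag INode MTime Size scheme: the tag is built
    # from filesystem attributes that differ from machine to machine.
    return f'"{inode:x}-{size:x}-{mtime:x}"'

def content_etag(body: bytes) -> str:
    # A content-based tag is identical on every server holding the same bytes.
    return '"' + hashlib.md5(body).hexdigest() + '"'

body = b"body { color: blue; }"
size, mtime = len(body), 1700000000

# The same file stored on two origin servers gets different inode numbers,
# so the inode-based ETags will never match during revalidation.
etag_server_a = inode_style_etag(inode=5120034, size=size, mtime=mtime)
etag_server_b = inode_style_etag(inode=9220411, size=size, mtime=mtime)
print(etag_server_a == etag_server_b)             # False: revalidation always misses

print(content_etag(body) == content_etag(body))   # True on any server
```

This is exactly why, behind a CDN, it is safer to unset ETags and let Cache-Control govern caching.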

Add this to your .htaccess file to unset them.

example
        
            # Disable ETags
            <IfModule mod_headers.c>
                Header unset ETag
            </IfModule>
            FileETag None
        

Using Plugins.

Tweaking and playing around with the .htaccess file can actually break a site. A small mistake can leave the web server unable to parse the file, causing it to serve blank pages or HTTP 500 errors. A simple plugin that takes care of handling all of this can be installed instead.

Most plugins that are intended to make pages faster will be doing most of these settings in the background. Besides handling all of the settings to leverage browser caching, these will be able to perform a number of other caching optimizations, such as creating temporary copies (file caching), database optimizations, Memcache and other optimizations which make a site faster overall.

Leverage browser caching in Nginx.

If a website is using Nginx as its server, different configuration is needed, because Nginx does not use .htaccess files. It's still relatively easy to implement: only a few edits to the conf file of the server are required. The code below needs to be added inside an existing server block in the conf file, which will typically be in /etc/nginx/sites-enabled/default.

code example
            server {
                listen       80;
                server_name  localhost;
            
                location / {
                    root   /usr/share/nginx/html;
                    index  index.html index.htm;
                }
            
                location ~*  \.(jpg|jpeg|png|gif|ico|css|js)$ {
                    expires 365d;
                }
            
                location ~*  \.(pdf)$ {
                    expires 30d;
                }
            }
        

Add Cache-Control Headers for Nginx:

code example
            location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
                expires 90d;
                add_header Cache-Control "public, no-transform";
            }
        

How to cache 3rd party resources.

Most third-party scripts and services do not specify a long expiration time, and there is no control over the caching headers they send. It is therefore advised to use as few third-party scripts as possible.


Enable keep-alive on the web server.

top

Usually, the browser establishes a connection to its server and uses that to transfer the files it needs to fetch. However, if the server isn't properly configured, users might need to establish new connections for every single file they want to transfer.

That is not an efficient way to load modern websites with dozens and sometimes hundreds of files. To avoid that situation, the web server can be configured to use what's called a “keep-alive” HTTP header or persistent connection.

Keep-alive is usually enabled by default on your origin server. But if this header is disabled, there are a few solutions to turn it on yourself.

Keep-Alive, also known as a persistent connection, is a communication pattern between a server and a client that reduces the number of TCP connections needed and speeds up a web page. When Keep-Alive is turned on, the client and the server agree to keep the connection open for subsequent requests and responses.

By default, HTTP connections close at the end of data transactions. This means that clients create a new connection to request each file of a page and servers close these TCP connections after sending the data. However, if a server needs to respond to multiple HTTP requests simultaneously and serve a single file for each new TCP connection, the site page's load time will be increased. To overcome this issue, website owners need to enable the Keep-Alive header to limit the number of new connections.

By turning the Keep-Alive connection header on, clients can download all the content such as JavaScript, CSS, images, and videos through a single TCP connection instead of sending a different request for each file. Keep-Alive can improve website speed and performance as it maintains an open connection between a client and a server, saving the time needed to serve files. Enabling Keep-Alive has additional benefits, such as:

Reduced CPU and memory usage: using persistent connections will benefit web hosting users. As fewer HTTP requests are generated, it reduces the usage of server resources.
HTTP pipelining: a client can deliver requests via the same TCP connection without waiting for the server to respond.
Modern browser support: many browsers use persistent connections automatically as long as the destination servers support Keep-Alive. A modern browser typically allows six connections per domain.
Increased SEO: enabling Keep-Alive improves site performance, which affects its search engine optimization (SEO) performance.
Reduced network congestion: lowering the number of TCP connections between servers and clients frees up network bandwidth.
Improved SSL/TLS performance: with Keep-Alive, an SSL/TLS connection only opens once, avoiding the need for additional handshakes and improving site latency.
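The effect of a persistent connection can be observed with Python's standard library alone. The sketch below starts a throwaway local HTTP/1.1 server and issues two requests over one http.client connection, confirming that the same TCP socket is reused (the address and handler are local test values, not anything from a real deployment).

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 keeps connections alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # required for reuse
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/first")
conn.getresponse().read()
sock_before = conn.sock          # remember the TCP socket of the first request

conn.request("GET", "/second")   # second request goes over the same connection
conn.getresponse().read()
reused = conn.sock is sock_before
print(reused)                    # True: no new TCP handshake was needed

conn.close()
server.shutdown()
```

With Keep-Alive disabled, every request would pay the TCP (and TLS) handshake cost the reuse above avoids.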

The file you need to prepare before enabling the HTTP Keep-Alive header depends on what server you use and your access privileges:

.htaccess: a directory-level configuration file that can be used to change the functions and features of sites hosted on Apache.
httpd.conf: the main configuration file of Apache. Note that you might not have access to this file if your website runs on shared hosting.
nginx.conf: the main configuration file of the NGINX.

Follow these steps to enable Keep-Alive:

Test the website's speed: to check whether Keep-Alive is enabled on your server, run a website speed test using a tool such as GTmetrix.

Enable Keep-Alive: to enable Keep-Alive, you need to explicitly request it via the HTTP header by accessing .htaccess or the main configuration file of your web server. If you turn on Keep-Alive, the HTTP response header will show "Connection: keep-alive". Four different methods can be used to enable Keep-Alive on your server.

option 1: edit the .htaccess file and add following code:

            <IfModule mod_headers.c>
                Header set Connection keep-alive
            </IfModule>
        

This method should work on most Linux-based shared hosting providers.

option 2: enable Keep-Alive in Apache via httpd.conf file. To locate the httpd.conf file, enter the following command into the command line: "find / -name httpd.conf". The following parameters affect Keep-Alive functionality in Apache, from enabling the persistent connection to defining the idle connection timeout:

KeepAlive : set to "KeepAlive On" to enable the extension or KeepAlive Off to disable it.
MaxKeepAliveRequests : this directive sets the maximum number of user requests the server will maintain during each Keep-Alive connection. Consider setting its value to 50 requests for a single connection. Increase the limit if your server needs to deliver more responses.
KeepAliveTimeout : this value specifies how long the server waits for new requests from a client. It's recommended to keep the idle connection open for five or ten seconds and increase it if required. Setting this value too high may cause a high server load and waste memory resources.

Here is what the configuration should look like:

configuration
            #
            # KeepAlive: Whether or not to allow persistent connections (more than
            # one request per connection). Set to "Off" to deactivate.
            #
            KeepAlive On
            #
            # MaxKeepAliveRequests: The maximum number of requests to allow
            # during a persistent connection. Set to 0 to allow an unlimited amount.
            # We recommend you leave this number high, for maximum performance.
            #
            MaxKeepAliveRequests 50
            #
            # KeepAliveTimeout: Number of seconds to wait for the next request from the
            # same client on the same connection.
            #
            KeepAliveTimeout 10
        

option 3: enable Keep-Alive Header in NGINX. NGINX is a server and a reverse proxy that has Keep-Alive enabled by default. Users can enable it using ngx_http_core_module. Look for the value "keepalive_disable". In most cases, this will be the reason why Keep-Alive is not working.

option 4: enable Keep-Alive in Windows Server (IIS). If you use a Windows-based server, enable the Keep-Alive extension through the command line. The following command enables it: "appcmd set config /section:httpProtocol /allowKeepAlive:true". To disable Keep-Alive, use: "appcmd set config /section:httpProtocol /allowKeepAlive:false".


Enable GZIP compression.

top

The more you can reduce file sizes without compromising quality, the better your website performance. One of the most robust and reliable compression frameworks is Gzip, but other methods can also deliver reduced file sizes without impacting the user experience.

GZIP is a compression method that makes it possible to reduce the file sizes of several elements within the website. In some cases, simply enabling GZIP compression can reduce the weight of webpages by up to 70%. The smaller a page is, the faster it will generally load. Many web hosts enable GZIP compression out of the box for almost all plans.

Two specific steps define gzip compression:

compression on the server: gzip shrinks the files down to a fraction of their original size. This is achieved by scanning the site's files and finding sequences of redundant information, which are then condensed into smaller blocks that take up less server space and improve visitor page load times.
delivery and decompression in the visitor's browser: after compressing the files, the gzipped version is delivered to the visitor's browser with a specific response header, "Content-Encoding: gzip". That header signals the browser to decompress the files before displaying them. The process happens in a fraction of a second and is invisible to the user, while the reduced transfer size benefits them massively.
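The two steps can be reproduced with Python's gzip module. The repetitive HTML below is a stand-in for a real page, and the size comparison illustrates why text-heavy pages shrink so dramatically.

```python
import gzip

# A stand-in for a real page: markup is highly repetitive, which is
# exactly the redundancy gzip condenses into smaller blocks.
html = ("<html><body>"
        + "<p>Hello, repeated markup compresses well.</p>" * 200
        + "</body></html>").encode()

compressed = gzip.compress(html)         # what a server sends with Content-Encoding: gzip
restored = gzip.decompress(compressed)   # what the browser does before rendering

print(len(html), len(compressed))        # the compressed payload is far smaller
print(restored == html)                  # True: the round trip is lossless
```

Real pages are less repetitive than this example, but reductions of well over half are common for HTML, CSS, and JavaScript.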

There are a few ways to enable compression for a site's files.

Some web hosts have the option in their control panel or enable it by default, asking to add a few lines of code to your website. Depending on your web hosting provider and the type of control panel they offer, you may have a few options.
Another common way to enable gzip compression is to edit the .htaccess file, which is present on many CMS platforms and custom websites. The most important piece of code to place in the .htaccess file is this:

      
        ## Enable gzip compression ##
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/plain
            AddOutputFilterByType DEFLATE text/html
            AddOutputFilterByType DEFLATE text/xml
            AddOutputFilterByType DEFLATE text/css
            AddOutputFilterByType DEFLATE application/xml
            AddOutputFilterByType DEFLATE application/xhtml+xml
            AddOutputFilterByType DEFLATE application/rss+xml
            AddOutputFilterByType DEFLATE application/javascript
            AddOutputFilterByType DEFLATE application/x-javascript
        </IfModule>
        ## End gzip compression ##

Avoid landing page redirects whenever possible.

top

Since mobile devices have overtaken desktop browsers when it comes to overall traffic, it doesn't make much sense to design multiple versions of a website. Instead, a single, mobile-friendly design is needed that scales across all possible resolutions.

It's best to avoid redirects whenever possible, because each redirect is another hoop that users have to jump through, and by reducing them, the site's loading times can be improved.

Redirects send users away from the page they've clicked on to another page. In many cases, redirects are a great way to connect high-ranking, high-traffic pages to newer content, but more redirects mean more load on the server, which can increase loading time.

While it's worth using a redirect initially to keep content views steady, replace old redirects with new content ASAP to keep load times short.


Use a content delivery network (CDN).

top

On most types of hosting (except cloud hosting), a website resides on a single server in a specific location. Every visitor needs to connect to that server in order to load the website, which can lead to bottlenecks.

CDNs are clusters of servers around the world that store copies of websites. A CDN uses multiple servers to store replications of content across multiple locations. When users visit a site, the CDN chooses the server (or servers) closest to their physical location to optimize content delivery.

For example, a site can be hosted in the US but use a CDN with servers in Latin America, Europe, and the rest of the world. If someone from Brazil tries to visit the site, that CDN will serve the site from its Latin American servers.

This setup provides two advantages: it reduces the load on servers and it translates to lower loading times for international visitors.
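A toy version of the routing decision a CDN makes might look like the Python sketch below; the edge names and coordinates are purely illustrative, and real CDNs also weigh server load and network health, not just distance.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Hypothetical CDN edge locations (names and coordinates are illustrative).
edges = {
    "us-east": (39.0, -77.5),    # Virginia
    "eu-west": (53.3, -6.3),     # Dublin
    "sa-east": (-23.5, -46.6),   # Sao Paulo
}

def nearest_edge(visitor):
    # Route the visitor to whichever edge is physically closest.
    return min(edges, key=lambda name: haversine_km(visitor, edges[name]))

# A visitor in Rio de Janeiro is routed to the Latin American edge.
print(nearest_edge((-22.9, -43.2)))   # sa-east
```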


Optimize images.

top

Images can make a site more engaging and memorable, but they can also drag down loading times, especially if they're high resolution. According to the "HTTP Archive", the median weight of images on a web page on desktop is over 1,000 KB (!).

Compressing these images before adding them to a site can save precious weight and time. Many photo-editing programs now include “save for web” options that optimize images for websites, but there are also free, online options available for compressing common file types such as .JPG, .PNG and .TIFF.

Moreover, the WebP format provides superior lossless and lossy compression for images.

Disable query strings for static resources.

top

Query strings are the suffixes that sometimes can be seen at the end of URLs, starting with special characters such as question marks or ampersands.

example: with query string: yourwebsite.com/style.css?ver=2

example: without query string: yourwebsite.com/style.css

The goal of query strings is to enable the identification of specific versions of an asset or get around caching restrictions. Query strings are important for versioning files. They help the system to separate files within the same file path and avoid caching issues. However, query strings are only important for dynamic resources: these refer to customized content for each visitor, such as a shopping cart, user profile, or login information.

Query strings can be used to “bust” the cache and force the browser to load the latest version of a file. But, query strings usually don't play nicely with CDNs or custom caching solutions. Ideally, the website should be configured to serve the latest versions of any files that it instructs users to cache.
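As a sketch of version-based cache busting, the hypothetical helper below appends a version to an asset URL with Python's urllib (the `ver` parameter name is a common convention, not a standard):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def bust_cache(url: str, version: str) -> str:
    # Append (or replace) a "ver" query parameter so browsers treat the
    # asset as a brand-new resource and bypass their cached copy.
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["ver"] = version
    return urlunparse(parts._replace(query=urlencode(query)))

print(bust_cache("https://yourwebsite.com/style.css", "2"))
# https://yourwebsite.com/style.css?ver=2
```

Serving a renamed file (style.v2.css) achieves the same goal and cooperates better with CDNs that ignore query strings.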


Specify a character set.

top

Character sets are mappings that assign a unique number to every text element on the web.

For example, UTF-8, which is the most popular encoding for websites, can represent every Unicode character. Many websites specify the character set each page uses in the HTTP response headers, like this: "Content-Type: text/html; charset=utf-8".

Indicating what character set a site uses can reduce the time it takes for browsers to interpret or parse the rest of its HTML documents.
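To illustrate what the browser gains from the declaration, the sketch below pulls the charset parameter out of a Content-Type value and uses it to decode the page bytes. The fallback default is an assumption for this example; real browsers run a more involved detection when the charset is missing, which is exactly the work the declaration saves.

```python
def parse_charset(content_type: str, default: str = "utf-8") -> str:
    # Pull the charset parameter out of a Content-Type header value;
    # fall back to a default when the site never declares one.
    for part in content_type.split(";")[1:]:
        key, _, value = part.strip().partition("=")
        if key.lower() == "charset":
            return value.strip('"').lower()
    return default

print(parse_charset("text/html; charset=utf-8"))   # utf-8
print(parse_charset("text/html"))                  # falls back to utf-8

raw = "café".encode("utf-8")
print(raw.decode(parse_charset("text/html; charset=utf-8")))   # café
```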


Minify website's scripts.

top

Most modern websites come with multiple CSS and JavaScript files. Each additional script a site needs to load has an impact on its performance. In most cases, removing that code isn't an option, since it usually provides critical features of the site.

What can be done is to minify those scripts. Minification is a simple process that involves removing unnecessary characters from the site's code: whitespace, line breaks, comments, extra semicolons, unused functions and variables, and longhand CSS selectors that could be written in shorthand.

To give you an idea of what that looks like, here's a simple CSS code snippet:

#bluetext { font-size: 2em; color: blue; } #redtext { font-size: 1em; color: red; }

after minification:

#bluetext{font-size:2em;color:blue;}#redtext{font-size:1em;color:red;}

By removing the empty spaces within that code, the file size can be reduced. For a few lines of code, that might not have a significant impact, but a modern site has many scripts and minification can make a noticeable difference in loading times.
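A minimal minifier capturing the transformation above can be written in a few lines of Python. This regex sketch covers only comments and whitespace; real tools such as cssnano or Terser handle many more cases.

```python
import re

def minify_css(css: str) -> str:
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)    # strip comments
    css = re.sub(r"\s+", " ", css)                     # collapse whitespace runs
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)       # drop spaces around punctuation
    return css.strip()

source = "#bluetext { font-size: 2em; color: blue; } #redtext { font-size: 1em; color: red; }"
print(minify_css(source))
# #bluetext{font-size:2em;color:blue;}#redtext{font-size:1em;color:red;}
```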


Reduce domain name system (DNS) lookups.

top

Every time a URL is typed in, the browser needs to look up the IP address it corresponds to through its DNS records. Those queries are usually fast, but if a website includes a lot of resources from third-party websites, DNS lookups can add up quickly.

A website can use several third-party JavaScript libraries hosted on external sites and, in that case, URLs for those libraries need to be mentioned within HTML files. When visitors try to access the site, their browsers would need to look up each URL for those libraries.

The longer your DNS server takes to respond, the longer your time to first byte (TTFB) and the slower your site loads.

Free online tools can determine where your DNS provider ranks compared to other offerings, which in turn helps pinpoint specific performance issues. In some cases, a hosting provider will also supply DNS services, while in others these two functions are separate.

The number of DNS lookups on site can be reduced in several ways, which include:

Hosting third-party resources on your server (or using a CDN);
Deferring loading for non-critical JavaScript files;
Enabling DNS prefetching for a website.
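As a rough way to gauge this cost, the snippet below counts the distinct hostnames a page references; each one costs the browser a DNS lookup on a cold cache. The regex extraction is a simplification of real HTML parsing, and the domain names are made up.

```python
import re
from urllib.parse import urlparse

def count_dns_lookups(html: str) -> int:
    # Each distinct hostname referenced by the page costs the browser
    # one DNS lookup (regex extraction is a rough illustration, not a parser).
    urls = re.findall(r'(?:src|href)="(https?://[^"]+)"', html)
    return len({urlparse(u).hostname for u in urls})

page = '''
<link href="https://fonts.example-cdn.com/font.css" rel="stylesheet">
<script src="https://cdn.example-analytics.com/tag.js"></script>
<script src="https://cdn.example-analytics.com/extra.js"></script>
<img src="https://www.example.com/logo.png">
'''
print(count_dns_lookups(page))   # 3 unique hostnames, so 3 lookups
```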


Put CSS at the top and JavaScript at the bottom.

top

"Top" and "bottom" refer to the head and footer sections of HTML documents. When a website is visited, the browser has to load all of the scripts within the head tags before it can render the rest of the page. That's acceptable for CSS, because those files contain the style rules the browser needs before it can display the page correctly.

JavaScript usually isn't critical when it comes to rendering pages. The code can be moved to the footer of the HTML document (or just before the end of the body tag) and those functions will still work once the page finishes loading. That practice is called "deferring JavaScript loading". The result is that a website will still work just the same, but it will start rendering faster because browsers don't need to finish loading all that JavaScript before they start showing your content.
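As a toy illustration of the idea, the hypothetical helper below moves external script tags to just before the closing body tag. Modern sites achieve the same effect more safely with the script `defer` attribute or a build tool; this regex transform only handles the simple case shown.

```python
import re

def defer_scripts(html: str) -> str:
    # Pull external <script> tags out of the document and reinsert them
    # just before </body>, so rendering is not blocked by script loading.
    scripts = re.findall(r'<script\b[^>]*></script>', html)
    stripped = re.sub(r'<script\b[^>]*></script>\s*', "", html)
    return stripped.replace("</body>", "".join(scripts) + "</body>")

page = ('<html><head><script src="app.js"></script>'
        '<link href="style.css" rel="stylesheet"></head>'
        '<body><p>Hi</p></body></html>')
print(defer_scripts(page))
# The script tag now sits at the end of <body> instead of in <head>.
```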


Fix broken URLs

top

Setting aside SEO or page performance, broken URLs can affect a website negatively. Having too many broken URLs on a site can waste its "crawl budget", preventing search engine bots from indexing other critical pages and costing visitors in the long run.

Moreover, broken URLs give users the impression that the website isn't properly maintained. This can drive visitors away, increasing the site's bounce rate in a very similar way that slow pages do.

A website should be crawled periodically to make sure all of its URLs are working. This can be easily done using tools such as Screaming Frog's SEO Spider or Google Search Console.

If you find broken URLs on your site, you can fix them manually or use a plugin. You may also want to ensure that your site doesn't have a misconfigured .htaccess file, which can sometimes cause broken links.


Combine JavaScript and CSS files

top

JavaScript and CSS files are among the largest files on a website. They also count as individual HTTP requests, so 5 JS files and 5 CSS files would require a total of 10 HTTP requests.

It is possible to combine sets of JavaScript or CSS files and reduce the number of steps required to completely load your site.

Whether you dive into the code itself or use a CMS, it's worth considering file combination to increase overall speed.

It's recommended to combine your CSS/JS files only in the following cases: your website uses HTTP/1.1; you have a simple website with not too many scripts.

This is because HTTP/1.1 limits how many concurrent requests can be processed, so combining CSS/JS files can yield performance benefits there. It's also worth noting that combining works best on a relatively simple site with few stylesheets or scripts.
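The idea can be sketched as a simple bundling step; the file names and contents below are made up for illustration, and real build tools additionally minify and rewrite the bundle.

```python
def combine_assets(files: dict) -> str:
    # Concatenate several stylesheets (or scripts) into one bundle so the
    # browser issues a single HTTP request instead of one request per file.
    parts = []
    for name, body in files.items():
        parts.append(f"/* {name} */\n{body}")   # keep a marker of each source file
    return "\n".join(parts)

stylesheets = {
    "reset.css": "* { margin: 0; }",
    "layout.css": ".grid { display: grid; }",
}
bundle = combine_assets(stylesheets)
print(bundle)
# Requests drop from len(stylesheets) to 1.
```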